Deep ResNets are recognized for achieving state-of-the-art results in machine learning tasks. However, the remarkable performance of these architectures relies on a training procedure that must be carefully crafted to avoid vanishing or exploding gradients, especially as the depth $L$ increases. No consensus has been reached on how to mitigate this issue, although a widely discussed strategy consists in scaling the output of each layer by a factor $\alpha_L$. We show in a probabilistic setting that, with standard i.i.d. initializations, the only non-trivial dynamics arise for $\alpha_L = 1/\sqrt{L}$ (other choices lead either to explosion or to an identity mapping). In the continuous-time limit, this scaling factor corresponds to a neural stochastic differential equation, contrary to the widespread interpretation of deep ResNets as discretizations of neural ordinary differential equations. By contrast, in the latter regime, stability is obtained with specific correlated initializations and $\alpha_L = 1/L$. Our analysis suggests a strong interplay between the scaling and the regularity of the weights as a function of the layer index. Finally, in a series of experiments, we exhibit a continuous range of regimes driven by these two parameters, which jointly affect performance before and after training.
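To make the scaling concrete, the following is a minimal sketch (not the authors' code; the architecture, dimensions, and tanh nonlinearity are illustrative assumptions) of a residual update multiplied by $\alpha_L$, with $\alpha_L = 1/\sqrt{L}$ for the non-trivial regime identified above and $\alpha_L = 1/L$ for the neural-ODE regime.

```python
import math
import torch
import torch.nn as nn

class ScaledResNet(nn.Module):
    """Residual network whose residual branch is scaled by alpha_L."""

    def __init__(self, dim: int, depth: int, scaling: str = "sqrt"):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        # alpha_L = 1/sqrt(L) (non-trivial regime under i.i.d. init) or 1/L (neural-ODE regime).
        self.alpha = 1.0 / math.sqrt(depth) if scaling == "sqrt" else 1.0 / depth

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            h = h + self.alpha * torch.tanh(layer(h))  # each residual update scaled by alpha_L
        return h

x = torch.randn(8, 32)
print(ScaledResNet(dim=32, depth=100)(x).std())  # activations remain O(1) instead of exploding
```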
The mathematical forces at work behind generative adversarial networks raise challenging theoretical issues. Motivated by the important question of characterizing the geometric properties of the generated distributions, we provide a thorough analysis of Wasserstein GANs (WGANs) in both the finite-sample and asymptotic regimes. We study the specific case where the latent space is univariate and derive results valid regardless of the dimension of the output space. We show in particular that, for a fixed sample size, optimal WGANs are closely linked with connected paths minimizing the sum of the squared Euclidean distances between the sample points. We also highlight the fact that WGANs are able to approach (for the 1-Wasserstein distance) the target distribution as the sample size tends to infinity, at a given convergence rate, provided the family of generative Lipschitz functions grows appropriately. In passing, we obtain new results on optimal transport theory in the semi-discrete setting.
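For reference, and as standard background rather than a result specific to this work, WGAN training targets the 1-Wasserstein distance through its Kantorovich-Rubinstein dual form:

```latex
% Standard background: Kantorovich--Rubinstein duality expresses the 1-Wasserstein
% distance between the target \mu and the generated distribution {G_\theta}_{\#}\nu
% as a supremum over 1-Lipschitz critics D,
\[
  W_1\bigl(\mu, {G_\theta}_{\#}\nu\bigr)
  = \sup_{\|D\|_{\mathrm{Lip}} \le 1}
    \mathbb{E}_{x \sim \mu}\bigl[D(x)\bigr]
    - \mathbb{E}_{z \sim \nu}\bigl[D\bigl(G_\theta(z)\bigr)\bigr],
\]
% so training amounts to minimizing this quantity over a (suitably growing)
% family of Lipschitz generators G_\theta.
```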
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
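As a usage illustration, here is a minimal sketch assuming the released checkpoints are fetched from the Hugging Face Hub under the bigscience organization; the small 560M-parameter variant is used so the example fits on commodity hardware, whereas the full model has 176B parameters.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small publicly released BLOOM variant (the full "bigscience/bloom" is ~176B parameters).
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is an open-access multilingual language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```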
In recent years, neuroscientists have taken an interest in the development of brain-computer interface (BCI) devices. Patients with motor impairments may benefit from BCIs as a means of communication and for the restoration of motor function. Electroencephalography (EEG) is one of the most commonly used methods for assessing neuronal activity. Deep neural networks (DNNs) have shown significant advantages in many computer vision applications. Toward the eventual use of DNNs, we propose here a shallow neural network that mainly uses two convolutional neural network (CNN) layers, with relatively few parameters, and that quickly learns spectro-temporal features from EEG. We compared this model with three other neural network models of different depths on a mental-arithmetic task, using eyes-closed recordings adapted to patients with motor impairments and reduced visual function. Experimental results show that the shallow CNN model outperforms all the other models and achieves the highest classification accuracy of 90.68%. It is also more robust on the cross-subject classification problem: the standard deviation of the accuracy is only 3%, compared with 15.6% for the conventional approach.
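A minimal sketch of such a two-convolution architecture follows; the layer sizes, kernel shapes, and pooling are illustrative assumptions, not the authors' exact configuration. One convolution filters along time, the second combines EEG channels, and a small linear head classifies.

```python
import torch
import torch.nn as nn

class ShallowEEGNet(nn.Module):
    """Shallow CNN with two convolutional layers for EEG classification (illustrative)."""

    def __init__(self, n_channels: int = 8, n_samples: int = 512, n_classes: int = 2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12))  # temporal filters
        self.spatial = nn.Conv2d(16, 16, kernel_size=(n_channels, 1))           # across electrodes
        self.pool = nn.AvgPool2d(kernel_size=(1, 32), stride=(1, 8))
        n_features = 16 * ((n_samples - 32) // 8 + 1)
        self.classifier = nn.Linear(n_features, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_samples) EEG segments
        h = torch.relu(self.temporal(x))
        h = torch.relu(self.spatial(h))
        h = self.pool(h).flatten(start_dim=1)
        return self.classifier(h)

logits = ShallowEEGNet()(torch.randn(4, 1, 8, 512))
print(logits.shape)  # torch.Size([4, 2])
```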
We are interested in data-driven requirements engineering, and in particular in the analysis of user reviews. These online reviews are a rich source of information for extracting new requirements and improvement requests. In this work, we provide an automated analysis based on CamemBERT, a state-of-the-art language model for French. We created a multi-label classification dataset of 6,000 user reviews from three applications in the health and fitness domain. The results are encouraging and suggest that reviews concerning requests for new features can be identified automatically. The dataset is available at: https://github.com/jl-wei/apia2022-french-user-reviews-classification-dataset.
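A minimal sketch of a multi-label classification setup on top of CamemBERT is given below; the label names and input text are placeholders, not the released pipeline or label set.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

labels = ["feature_request", "bug_report", "praise"]  # hypothetical labels for illustration
tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "camembert-base",
    num_labels=len(labels),
    problem_type="multi_label_classification",  # sigmoid/BCE head, one probability per label
)

# Scores from the freshly initialized head are meaningless until the model is fine-tuned
# on the annotated reviews; this only shows the inference plumbing.
batch = tokenizer(["J'aimerais pouvoir exporter mes données."], return_tensors="pt", padding=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**batch).logits)
print(dict(zip(labels, probs[0].tolist())))
```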
We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior to the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between large- and small-size networks, where for the latter poor-quality local minima have a nonzero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant, as the global minimum often leads to overfitting.
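For reference, the spherical spin-glass Hamiltonian invoked in this correspondence is the standard one below (background material, not an object introduced by this work); in the analogy, the number of interacting spins plays the role of the network depth.

```latex
% Standard H-spin spherical spin-glass Hamiltonian with i.i.d. Gaussian couplings J,
% under the spherical constraint (1/N) \sum_i \sigma_i^2 = 1:
\[
  \mathcal{H}_{N,H}(\sigma)
  = \frac{1}{N^{(H-1)/2}}
    \sum_{i_1, \dots, i_H = 1}^{N}
    J_{i_1 \dots i_H}\, \sigma_{i_1} \cdots \sigma_{i_H}.
\]
```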